Do AI Models Also Have Memory Vulnerabilities? Microsoft Warns: Be Cautious of Poisoned Share Buttons
Microsoft warns of a new form of 'AI Prompt Poisoning' attack, where attackers embed hidden instructions in web links to induce AI models to generate biased or misleading content. This attack exploits the 'memory' mechanism of AI models; when users click on the link, malicious prompt words are secretly input into the AI, causing it to execute harmful commands.